Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays [1]. NumPy is the primary array programming library for the Python language [2,3,4,5]. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance, and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [6] and the first imaging of a black hole [7]. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries.
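As a minimal illustration of the array concepts described above (the arrays here are invented for demonstration), the snippet below shows vectorized arithmetic, broadcasting, and axis-wise reduction replacing explicit Python loops:

```python
import numpy as np

signal = np.linspace(0.0, 1.0, 5)          # 1-D array of 5 samples
weights = np.array([[1.0], [2.0], [3.0]])  # 3x1 column vector

# Broadcasting stretches both operands to a common 3x5 shape
# without copying data, then multiplies element-wise.
scaled = weights * signal                   # shape (3, 5)

# Reductions and slicing use the same compact syntax.
print(scaled.sum(axis=1))                   # row sums, shape (3,)
print(scaled[:, ::2])                       # every other column
```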
Strategic test allocation plays a major role in the control of both emerging and existing pandemics (e.g., COVID-19, HIV). Widespread testing supports effective epidemic control by (1) reducing transmission by identifying cases, and (2) tracking outbreak dynamics to inform targeted interventions. However, infectious disease surveillance presents unique statistical challenges. For instance, the true outcome of interest (an individual's infection status) is often a latent variable. In addition, the presence of both network and temporal dependence reduces the data to a single observation. As testing entire populations regularly is neither efficient nor feasible, standard approaches recommend simple rule-based testing strategies (e.g., symptom-based testing, contact tracing) that do not take individual risk into account. In this work, we study an adaptive sequential design involving n individuals over a period of τ time-steps, which allows for unspecified dependence among individuals and across time. Our causal target parameter is the mean latent outcome we would have obtained after one time-step if, starting at time t given the observed past, we had carried out a stochastic intervention that maximizes the outcome under a resource constraint. We propose an Online Super Learner for adaptive sequential surveillance that learns the optimal choice of testing strategy over time while adapting to the current state of the outbreak. Relying on a series of working models, the proposed method learns across samples, through time, or both, depending on the underlying (unknown) structure in the data. We present an identification result for the latent outcome in terms of the observed data, and demonstrate the superior performance of the proposed strategy in a simulation modeling a residential university environment during the COVID-19 pandemic.
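One hedged way to formalize the verbal definition above (the notation is ours, for illustration, and may differ from the paper's): writing $\bar{O}(t)$ for the observed past of all $n$ individuals through time $t$, $Y^{g}_{t+1}$ for the latent outcome one time-step ahead under a stochastic testing intervention $g$, and $\mathcal{G}_c$ for the set of interventions satisfying the resource constraint $c$, the target parameter reads

$$\theta_t \;=\; \max_{g \in \mathcal{G}_c} \, \mathbb{E}\!\left[\, Y^{g}_{t+1} \mid \bar{O}(t) \,\right].$$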
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences. Fair ML has largely focused on the protection of a single attribute in the simpler setting where both attributes and target outcomes are binary. However, many real-world problems entail the simultaneous protection of multiple sensitive attributes, which are often not simply binary but continuous or categorical. To address this more challenging task, we introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces. This leads to two practical tools: first, the FairCOCCO Score, a normalised metric that can quantify fairness in settings with single or multiple sensitive attributes of arbitrary type; and second, a subsequent regularisation term that can be incorporated into arbitrary learning objectives to obtain fair predictors. These contributions address crucial gaps in the algorithmic fairness literature, and we empirically demonstrate consistent improvements over state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
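The abstract does not spell out the estimator, but FairCOCCO builds on kernel cross-covariance operators; the sketch below shows the closely related (biased) HSIC statistic, a kernel dependence measure between predictions and a sensitive attribute of arbitrary type. The normalization used by the actual FairCOCCO Score is assumed to differ; function names are illustrative.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF Gram matrix for samples in the rows of x (1-D or 2-D input;
    categorical attributes can be one-hot encoded first)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(preds, sensitive, sigma=1.0):
    """Biased HSIC estimate: zero in expectation iff the predictions and
    the sensitive attribute are independent."""
    n = len(preds)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K = rbf_gram(preds, sigma)
    L = rbf_gram(sensitive, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A regularised objective in the spirit described above would then be, e.g., the prediction loss plus λ · hsic(predictions, sensitive_attribute).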
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
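A minimal usage sketch, assuming the released checkpoints are served through the Hugging Face transformers library under the bigscience namespace (the full 176B model needs multi-GPU hardware, so a small assumed variant is used here for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # assumed small released variant
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Few-shot / instruction-style prompting as described above.
inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```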
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the Vascular Lesions Detection and Segmentation (Where is VALDO?) challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while utilizing weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - microbleeds, and 6 for Task 3 - lacunes). Data from multiple sites were used for both training and evaluation. Results showed large variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - microbleeds, and no practically usable results yet for Task 3 - lacunes. The challenge also highlighted performance inconsistencies across cases that may deter use at the individual level, while still proving useful at the population level.
Melanoma is a serious form of skin cancer with a high mortality rate at later stages. Fortunately, when detected early, the prognosis of melanoma is promising, and the incidence of malignant melanoma is relatively low. As a result, datasets are heavily imbalanced, which complicates the training of current state-of-the-art supervised classification AI models. We propose to use generative models to learn the benign data distribution and detect out-of-distribution (OOD) malignant images through density estimation. Normalizing flows (NFs) are ideal candidates for OOD detection because they can compute exact likelihoods. However, their inductive biases towards apparent graphical features rather than semantic context hamper accurate OOD detection. In this work, we aim to use these biases together with domain-level knowledge of melanoma to improve likelihood-based OOD detection of malignant images. Our encouraging results demonstrate the potential of using NFs to detect melanoma. We achieve a 9% increase in the area under the receiver operating characteristic curve by using wavelet-based NFs. This model requires significantly fewer parameters, making it more suitable for edge devices. The proposed method can help medical experts diagnose skin cancer patients and continuously improve survival rates. Furthermore, this research paves the way for other areas of oncology with similar data imbalance problems. Code available:
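A minimal sketch of the likelihood-based OOD detection described above, assuming a flow object exposing an exact log-likelihood (flow.log_prob is a stand-in for any normalizing-flow library, e.g., a wavelet-based flow as in the text); the percentile threshold is an assumption for illustration:

```python
import numpy as np

def fit_threshold(flow, benign_images, quantile=5.0):
    """Calibrate an OOD threshold from benign (in-distribution) data:
    images scoring below a low percentile of benign log-likelihoods
    will be flagged."""
    scores = np.array([flow.log_prob(x) for x in benign_images])
    return np.percentile(scores, quantile)

def is_ood(flow, image, threshold):
    """Flag suspected malignant (out-of-distribution) images: the flow,
    trained only on benign data, assigns them unusually low likelihood."""
    return flow.log_prob(image) < threshold
```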
Explainable artificial intelligence (XAI) is increasingly used to analyze the behavior of neural networks. Concept activation uses human-interpretable concepts to explain neural network behavior. This study aimed to assess the feasibility of regression concept activations for explaining the detection and classification of multimodal volumetric data. A proof of concept was demonstrated in metastatic prostate cancer patients imaged with positron emission tomography/computed tomography (PET/CT). Multimodal volumetric concept activations were used to provide global and local explanations. Sensitivity was 80% at 1.78 false positives per patient. Global explanations showed that detection focused on anatomical location on CT and on detection confidence on PET. Local explanations showed promise in helping to distinguish true positives from false positives. Hence, this study demonstrated the feasibility of using regression concept activations to explain the detection and classification of multimodal volumetric data.
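A hedged sketch of the core idea: where binary concept activation (as in TCAV) fits a classifier, a regression concept activation fits a linear regression from a layer's activations to a continuous concept measure and uses the fitted weight vector as the concept direction. All names are illustrative, and the paper's exact procedure is assumed to differ in detail.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def concept_direction(activations, concept_values):
    """Fit a linear regression from layer activations (n_samples x n_units)
    to a continuous concept measure; the unit-norm weight vector serves as
    the concept direction."""
    reg = LinearRegression().fit(activations, concept_values)
    w = reg.coef_
    return w / np.linalg.norm(w)

def concept_score(activation, direction):
    """Project a test activation onto the concept direction to score how
    strongly the concept is expressed for that input."""
    return float(activation @ direction)
```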
Growth of unruptured intracranial aneurysms (UIAs) is a predictor of rupture. Therefore, for imaging surveillance and treatment planning, it is important to be able to predict whether a UIA will grow based on an initial baseline time-of-flight MRA (TOF-MRA). It is known that the size and shape of UIAs are predictors of aneurysm growth and/or rupture. We performed a feasibility study of future UIA growth prediction from baseline TOF-MRA using a mesh convolutional neural network. We included 151 TOF-MRAs containing 169 UIAs, of which 49 UIAs were classified as growing and 120 as stable, based on the clinical definition of growth (>1 mm increase in size on the follow-up scan). UIAs were segmented from the TOF-MRAs and meshes were generated automatically. We investigated both input of the UIA mesh alone and input of a region-of-interest (ROI) mesh including the UIA and the surrounding parent vessels. We developed a classification model to predict which UIAs will grow and which will remain stable. The model consisted of a mesh convolutional neural network with additional novel input edge features, the shape index and curvedness, which describe surface topology. We also investigated whether including the coordinates of the edge midpoints as input features affected model performance. The model with the highest AUC (63.8%) for growth prediction used UIA meshes with edge midpoint coordinate features (mean F1 score = 62.3%, accuracy = 66.9%, sensitivity = 57.3%, specificity = 70.8%). We present a future UIA growth prediction model based on a mesh convolutional neural network with promising results.
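For reference, the two curvature-based edge features named above are standard surface descriptors computed from the principal curvatures κ1 ≥ κ2 (Koenderink's shape index and curvedness); a small sketch, with the exact per-edge aggregation assumed:

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index in [-1, 1]: -1 for a cup, 0 for a saddle, +1 for a dome.
    Assumes principal curvatures with k1 >= k2."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Curvedness: the overall magnitude of surface bending."""
    return np.sqrt((k1**2 + k2**2) / 2.0)
```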
Computer-aided methods have shown added value for diagnosing and predicting brain disorders and can thus support decision-making in clinical care and treatment planning. This chapter provides insight into the types of methods, how they work, the input data they use (such as cognitive tests, imaging, and genetic data), and the types of output they provide. We focus on specific use cases for diagnosis, i.e., estimating the current "condition" of a patient, such as early detection and diagnosis of dementia, differential diagnosis of brain tumors, and decision-making in stroke. Regarding prediction, i.e., estimation of a patient's future "condition", we zoom in on use cases such as predicting the disease course in multiple sclerosis and predicting patient outcomes after treatment for brain cancer. Furthermore, based on these use cases, we assess the current state-of-the-art methodology and highlight current efforts to benchmark these methods and the importance of open science therein. Finally, we assess the current clinical impact of computer-aided methods and discuss the next steps required to increase their clinical impact.
Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias, and therefore many alternative solutions have been proposed through differentiable optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and angularly separating the vectors. This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization; it can be solved in closed form prior to training and plugged into the network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to performance. The closed-form implementation and code are available on GitHub.
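A minimal sketch of the construction as described above: k class vectors in R^(k-1) with maximal pairwise separation (a regular simplex, pairwise inner product -1/(k-1)), obtained recursively in closed form and applied as one fixed matrix multiplication before the softmax. The recursion below is a standard simplex construction and is assumed to match the paper's in spirit, not necessarily in exact form.

```python
import numpy as np

def max_separated_vectors(k):
    """Return a (k, k-1) matrix whose unit-norm rows are maximally
    separated: every pairwise inner product equals -1/(k-1)."""
    if k == 2:
        return np.array([[1.0], [-1.0]])      # two classes on a line
    sub = max_separated_vectors(k - 1)        # (k-1, k-2) sub-simplex
    first = np.concatenate(([1.0], np.zeros(k - 2)))
    scale = np.sqrt(1.0 - 1.0 / (k - 1) ** 2)
    rest = np.hstack([np.full((k - 1, 1), -1.0 / (k - 1)), scale * sub])
    return np.vstack([first, rest])

M = max_separated_vectors(10)                 # e.g., 10 classes
features = np.random.randn(32, 9)             # network embeddings in R^9
logits = features @ M.T                       # fixed, non-learned layer
```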